We work towards a formula to determine if a matrix is nonsingular
For a 1×1 matrix
(a) is trivially nonsingular iff a≠0
For a 2×2 matrix
$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is nonsingular iff $ad-bc \neq 0$
For a 3×3 matrix
$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ is nonsingular iff $aei+bfg+cdh-hfa-idb-gec \neq 0$
This gives a family of formulas — a, ad−bc, and so on, one for each n×n size — which we call the determinant.
The determinant function detn×n:Mn×n→R is defined for each n such that an n×n matrix T is nonsingular iff detn×n(T)≠0
Based on the first three determinants, we extrapolate conditions that the determinant function must satisfy:
if T has rows ρ1,...,ρn
det(ρ1,...,kρi+ρj,...,ρn)=det(ρ1,...,ρj,...,ρn) for i≠j
(row combination operations don't change the determinant)
det(ρ1,...,ρj,...,ρi,...,ρn)=−det(ρ1,...,ρi,...,ρj,...,ρn) for i≠j
(swapping rows makes the determinant negative)
det(ρ1,...,kρi,...,ρn)=k⋅det(ρ1,...,ρi,...,ρn) for any scalar k (including k=0)
(multiplying a row by k multiplies the determinant by k)
det(I)=1 for identity matrix I
We often write ∣T∣ instead of det(T)
Small Note
(2) is redundant because
$$T \xrightarrow{\rho_i+\rho_j} \xrightarrow{-\rho_j+\rho_i} \xrightarrow{\rho_i+\rho_j} \xrightarrow{-\rho_i} \hat T$$
This swaps the rows, and the first three operations don't change the determinant while the last negates it.
From above, we can derive these lemmas:
A matrix with two identical rows has a determinant of 0.
A matrix with a zero row has a determinant of 0.
A matrix is nonsingular iff its determinant is nonzero.
The determinant of an echelon form matrix is the product down its diagonal.
Proof
For the first, swap the two identical rows. By condition (2), the determinant is the opposite but the matrix is the same, so it must be 0.
For the second, multiply the zero row by 2. By condition (3), the determinant is doubled, but the matrix remains the same, so the determinant is the same. Thus it must be 0.
The third is by definition.
The fourth has two cases: if the echelon form matrix is singular, then it has a zero row, and so has a 0 on its diagonal. By the second lemma its determinant is 0, which equals the product down the diagonal.
If the echelon form matrix is nonsingular then none of the diagonal entries are 0. We can then use condition (3) to get 1s on the diagonal:
$$\begin{vmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,n} \\ 0 & t_{2,2} & \cdots & t_{2,n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & t_{n,n} \end{vmatrix} = t_{1,1} t_{2,2} \cdots t_{n,n} \begin{vmatrix} 1 & t_{1,2}/t_{1,1} & \cdots & t_{1,n}/t_{1,1} \\ 0 & 1 & \cdots & t_{2,n}/t_{2,2} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix}$$
Then clearing out the entries above the diagonal uses condition (1), so
$$= t_{1,1} t_{2,2} \cdots t_{n,n} \cdot |I| = t_{1,1} t_{2,2} \cdots t_{n,n}$$
So, the determinant is the product down the diagonal in this case as well.
With these rules, we can find the determinant using Gauss's Method
Example 13.1
Using Gauss's Method, find the determinant of the matrix $\begin{pmatrix} 1 & 3 & -2 \\ 2 & 0 & 4 \\ 3 & -1 & 5 \end{pmatrix}$
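The procedure used in this example can be sketched in code (a minimal illustration, not from the text; the helper name `det_by_gauss` and the use of exact fractions are my own choices):

```python
from fractions import Fraction

def det_by_gauss(rows):
    """Determinant via Gauss's Method: row combinations leave it unchanged,
    each row swap negates it, and at the end it equals the product down the
    diagonal of the echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    sign = 1
    for col in range(n):
        # find a nonzero pivot, swapping it into place if needed
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # a zero column: the matrix is singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign        # condition (2): a swap negates the determinant
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            # condition (1): this row combination leaves the determinant unchanged
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= m[i][i]       # product down the diagonal
    return result

print(det_by_gauss([[1, 3, -2], [2, 0, 4], [3, -1, 5]]))  # -> 14
```

Running it on the matrix above gives 14.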
For a 2×2 determinant, notice how the terms are diagonals of the matrix.
In a 3×3 matrix, a diagonal mnemonic (the rule of Sarrus) can be used: copy the first two columns to the right of the matrix, then add the products along the three down-right diagonals and subtract the products along the three up-right diagonals.
For larger matrices, use Gauss's Method
An upper triangular matrix is a square matrix with only 0s below the diagonal:
$$\begin{pmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,n} \\ 0 & t_{2,2} & \cdots & t_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & t_{n,n} \end{pmatrix}$$
The determinant of an upper triangular matrix T is the product of the diagonals.
Proof
If the diagonal entries are all nonzero, the matrix is in echelon form, so the determinant is the product of the diagonals.
If $t_{i,i}=0$ for some $i \in \{1,...,n\}$, we must prove that |T|=0. This is true iff T is singular, so it suffices to show that the columns of T are linearly dependent. Consider the matrix formed by the first i columns. It has the form
$$\begin{pmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,i-1} & t_{1,i} \\ 0 & t_{2,2} & \cdots & t_{2,i-1} & t_{2,i} \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & t_{i-1,i-1} & t_{i-1,i} \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}$$
Only the first i−1 rows can be nonzero, so these i columns lie in a space of dimension at most i−1 and must be linearly dependent. But they are a subset of the columns of T, so the columns of T are linearly dependent as well.
Permutation Expansion
Existence and Uniqueness
The problem with using conditions to define a function is that we must verify that there is one and only one function that satisfies those conditions.
We do that by determining a well-defined formula.
First, prove its uniqueness:
For each n, if there is an n×n determinant function then it is unique.
Proof
Suppose there existed two functions det1,det2:Mn×n→R satisfying the four conditions. Given a square matrix M, fix some way of reducing it to echelon form, keeping track of the sign changes and scalar factors, and multiply down the diagonal at the end. Both functions must return this same result, and since they agree on every matrix, they are the same function.
Let V be a vector space. A map f:Vn→R is multilinear if
f(ρ1,...,v+w,...,ρn)=f(ρ1,...,v,...,ρn)+f(ρ1,...,w,...,ρn)
(the function splits addition one input at a time)
f(ρ1,...,kv,...,ρn)=k⋅f(ρ1,...,v,...,ρn)
(the function splits scalar multiples one input at a time)
Determinants are multilinear
Proof
The second of the two properties is simply condition (3) from above.
For the first property, there are two cases:
If the set of the other rows {ρ1,...,ρi−1,ρi+1,...,ρn} is linearly dependent, all three matrices are singular so we get the trivial 0=0.
Therefore, assume the set of other rows is linearly independent. Then we can add another vector to make a basis: {ρ1,...,ρi−1,β,ρi+1,...,ρn}
Then v and w can be expressed with respect to this basis and added:
$$v = v_1 \rho_1 + \cdots + v_{i-1} \rho_{i-1} + v_i \beta + v_{i+1} \rho_{i+1} + \cdots + v_n \rho_n$$
$$w = w_1 \rho_1 + \cdots + w_{i-1} \rho_{i-1} + w_i \beta + w_{i+1} \rho_{i+1} + \cdots + w_n \rho_n$$
$$v + w = (v_1 + w_1) \rho_1 + \cdots + (v_i + w_i) \beta + \cdots + (v_n + w_n) \rho_n$$
Now substitute this into the left-hand side of property (1):
$$\det(\rho_1, \ldots, (v_1+w_1)\rho_1 + \cdots + (v_i+w_i)\beta + \cdots + (v_n+w_n)\rho_n, \ldots, \rho_n)$$
From condition (1), the determinant doesn't change if we add $-(v_j+w_j)$ times $\rho_j$ for each $j \neq i$; doing that and then applying condition (3) gives
$$\det(\rho_1,...,v+w,...,\rho_n) = \det(\rho_1,...,(v_i+w_i)\beta,...,\rho_n) = (v_i+w_i) \cdot \det(\rho_1,...,\beta,...,\rho_n)$$
$$= v_i \cdot \det(\rho_1,...,\beta,...,\rho_n) + w_i \cdot \det(\rho_1,...,\beta,...,\rho_n) = \det(\rho_1,...,v_i\beta,...,\rho_n) + \det(\rho_1,...,w_i\beta,...,\rho_n)$$
Now add the vjρj's to the first and wjρj's to the second to recreate v and w, giving the desired expression.
Example 13.3
The determinant of $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ can be split using multilinearity:
$$\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = \begin{vmatrix} 0 & 2 \\ 3 & 4 \end{vmatrix} + \begin{vmatrix} 1 & 0 \\ 3 & 4 \end{vmatrix} = \begin{vmatrix} 1 & 0 \\ 3 & 0 \end{vmatrix} + \begin{vmatrix} 1 & 0 \\ 0 & 4 \end{vmatrix} + \begin{vmatrix} 0 & 2 \\ 3 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 2 \\ 0 & 4 \end{vmatrix}$$
The leftmost and rightmost matrices are 0 because in each the second row is a scalar multiple of the first row.
$$= 4 \cdot \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + 2 \cdot 3 \cdot \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix}$$
We discuss evaluating these matrices below.
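The split in Example 13.3 is easy to check numerically with the 2×2 formula ad−bc (a quick sketch; the helper name `det2` is my own):

```python
def det2(m):
    # the 2x2 formula ad - bc from the start of the section
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# multilinearity in the first row: (1, 2) = (0, 2) + (1, 0)
whole = det2([[1, 2], [3, 4]])
parts = det2([[0, 2], [3, 4]]) + det2([[1, 0], [3, 4]])
assert whole == parts == -2
print(whole)  # -> -2
```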
Example 13.4
The determinant can be reduced to a sum of determinants where each row has one element from the original.
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 4 & 0 & 0 \\ 7 & 0 & 0 \end{vmatrix} + \begin{vmatrix} 1 & 0 & 0 \\ 4 & 0 & 0 \\ 0 & 8 & 0 \end{vmatrix} + \cdots + \begin{vmatrix} 0 & 0 & 3 \\ 0 & 0 & 6 \\ 0 & 8 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & 3 \\ 0 & 0 & 6 \\ 0 & 0 & 9 \end{vmatrix}$$
If any two entries came from the same column, then one row is a multiple of another, so the determinant is zero. This leaves 6 determinants:
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 9 \end{vmatrix} + \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 6 \\ 0 & 8 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 2 & 0 \\ 4 & 0 & 0 \\ 0 & 0 & 9 \end{vmatrix} + \begin{vmatrix} 0 & 2 & 0 \\ 0 & 0 & 6 \\ 7 & 0 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & 3 \\ 4 & 0 & 0 \\ 0 & 8 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & 3 \\ 0 & 5 & 0 \\ 7 & 0 & 0 \end{vmatrix}$$
$$= 45 \cdot \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + 48 \cdot \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} + 72 \cdot \begin{vmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{vmatrix} + 84 \cdot \begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{vmatrix} + 96 \cdot \begin{vmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + 105 \cdot \begin{vmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{vmatrix}$$
We discuss evaluating these matrices below.
Permutation Matrices
A permutation matrix is a matrix where every entry is 0 except for a single 1 in each row and column.
Define an n-permutation as a function on the first n integers ϕ:{1,...,n}→{1,...,n} that is bijective.
In other words, each of 1,...,n in the output is associated with exactly one input.
Example 13.5
The 3-permutations, written in one-line notation, are
$$\phi_1 = \langle 1,2,3 \rangle \quad \phi_2 = \langle 1,3,2 \rangle \quad \phi_3 = \langle 2,1,3 \rangle \quad \phi_4 = \langle 2,3,1 \rangle \quad \phi_5 = \langle 3,1,2 \rangle \quad \phi_6 = \langle 3,2,1 \rangle$$
We denote the row matrix with all 0s except for a 1 in entry j by $\iota_j$ (e.g. the four-wide $\iota_2 = (0\;1\;0\;0)$). With this, we associate the permutation $\phi = \langle \phi(1),...,\phi(n) \rangle$ with the permutation matrix whose rows are $\iota_{\phi(1)},...,\iota_{\phi(n)}$
For example, the 4-permutation $\phi = \langle 2,4,3,1 \rangle$ is associated with the matrix
$$P_\phi = \begin{pmatrix} \iota_2 \\ \iota_4 \\ \iota_3 \\ \iota_1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$
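Building the rows ιϕ(1),...,ιϕ(n) is mechanical; a small sketch (the helper name `perm_matrix` is my own, and `itertools.permutations` is used to enumerate n-permutations):

```python
from itertools import permutations

def perm_matrix(phi):
    """Build P_phi: row r is iota_{phi(r)}, i.e. its single 1 sits in column phi(r)."""
    n = len(phi)
    return [[1 if c == phi[r] - 1 else 0 for c in range(n)] for r in range(n)]

# there are 6 three-permutations, as in Example 13.5
assert len(list(permutations((1, 2, 3)))) == 6

# the 4-permutation <2, 4, 3, 1> from the text
for row in perm_matrix((2, 4, 3, 1)):
    print(row)
```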
Now we can define the permutation expansion for determinants:
$$\begin{vmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,n} \\ t_{2,1} & t_{2,2} & \cdots & t_{2,n} \\ \vdots & & & \vdots \\ t_{n,1} & t_{n,2} & \cdots & t_{n,n} \end{vmatrix} = t_{1,\phi_1(1)} t_{2,\phi_1(2)} \cdots t_{n,\phi_1(n)} |P_{\phi_1}| + t_{1,\phi_2(1)} t_{2,\phi_2(2)} \cdots t_{n,\phi_2(n)} |P_{\phi_2}| + \cdots + t_{1,\phi_k(1)} t_{2,\phi_k(2)} \cdots t_{n,\phi_k(n)} |P_{\phi_k}|$$
where $\phi_1,...,\phi_k$ is the list of all n-permutations (so k = n!)
In summation notation,
$$|T| = \sum_{\text{permutations } \phi} t_{1,\phi(1)} \cdot t_{2,\phi(2)} \cdots t_{n,\phi(n)} |P_\phi|$$
the sum over all permutations $\phi$ of terms of the form $t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} |P_\phi|$
Example 13.6
Consider a 2×2 matrix. There are two 2-permutations, $\phi_1 = \langle 1,2 \rangle$ and $\phi_2 = \langle 2,1 \rangle$. The associated permutation matrices are
$$P_{\phi_1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad P_{\phi_2} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
So we get the expansion
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad \cdot \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + bc \cdot \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = ad \cdot (1) + bc \cdot (-1) = ad - bc$$
This gives the familiar formula for the determinant of the 2×2 matrix.
Note that $\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = -1$ because it is one row swap away from I
Theorem: For each n there is an n×n determinant function.
Theorem: The determinant of a matrix equals the determinant of its transpose.
This means statements about rows can be applied to the columns too; e.g. if row combinations don't change the determinant, column combinations don't either.
Also, a matrix is singular if two columns are equal, swapping columns changes the sign, and determinants are multilinear in their columns.
If T is a lower triangular matrix, the determinant is still the product of the diagonal entries.
This is true because $T^T$ has the same diagonal entries and is an upper triangular matrix, whose determinant is the product down the diagonal, and $|T| = |T^T|$
Existence of Determinants
In a permutation $\phi = \langle ...,k,...,j,... \rangle$, or in the corresponding permutation matrix $P_\phi$ with rows $..., \iota_k, ..., \iota_j, ...$, a pair of elements (or rows) where the larger k appears before the smaller j is called an inversion. For example, $\phi = \langle 3,2,1 \rangle$ has 3 inversions: 3 before 2, 3 before 1, and 2 before 1.
A row swap in a permutation matrix changes the parity of the number of inversions.
Proof
If the rows are adjacent, swapping the two won't affect the inversions of any other element, so it changes the number of inversions by 1.
If they are not adjacent, then swap them via a sequence of adjacent swaps, starting by bringing row k up:
$$\begin{pmatrix} \vdots \\ \iota_{\phi(j)} \\ \iota_{\phi(j+1)} \\ \vdots \\ \iota_{\phi(k)} \\ \vdots \end{pmatrix} \xrightarrow{\rho_k \leftrightarrow \rho_{k-1}} \xrightarrow{\rho_{k-1} \leftrightarrow \rho_{k-2}} \cdots \xrightarrow{\rho_{j+1} \leftrightarrow \rho_j} \begin{pmatrix} \vdots \\ \iota_{\phi(k)} \\ \iota_{\phi(j)} \\ \vdots \\ \iota_{\phi(k-1)} \\ \vdots \end{pmatrix}$$
then moving row j down:
$$\xrightarrow{\rho_{j+1} \leftrightarrow \rho_{j+2}} \xrightarrow{\rho_{j+2} \leftrightarrow \rho_{j+3}} \cdots \xrightarrow{\rho_{k-1} \leftrightarrow \rho_k} \begin{pmatrix} \vdots \\ \iota_{\phi(k)} \\ \iota_{\phi(j+1)} \\ \vdots \\ \iota_{\phi(j)} \\ \vdots \end{pmatrix}$$
The total number of swaps is (k−j)+(k−j−1)=2(k−j)−1, which is odd, so it changes the parity of the number of inversions
The signum of a permutation, sgn(ϕ), is
$$\operatorname{sgn}(\phi) = \begin{cases} +1 & \text{if the number of inversions is even} \\ -1 & \text{if the number of inversions is odd} \end{cases}$$
If sgn(ϕ)=−1 then it takes an odd number of row swaps to bring $P_\phi$ back to the identity, and if sgn(ϕ)=+1 an even number (each swap changes the parity of the inversion count). Since a row swap changes the sign of the determinant and |I|=1, it follows that $|P_\phi| = \operatorname{sgn}(\phi)$
Thus, the permutation expansion becomes
$$d(T) = \sum_{\text{permutations } \phi} t_{1,\phi(1)} \cdot t_{2,\phi(2)} \cdots t_{n,\phi(n)} \cdot \operatorname{sgn}(\phi)$$
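This formula can be run directly by counting inversions (a sketch of my own; the function names `sgn` and `d` mirror the notation here, and it is O(n!·n²), so it is only practical for small n):

```python
from itertools import permutations

def sgn(phi):
    """-1 if the number of inversions (a larger element before a smaller one) is odd, else +1."""
    n = len(phi)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n) if phi[a] > phi[b])
    return -1 if inversions % 2 else 1

def d(t):
    """Permutation expansion: sum over phi of t[1,phi(1)]...t[n,phi(n)] * sgn(phi)."""
    n = len(t)
    total = 0
    for phi in permutations(range(n)):  # 0-indexed permutations
        term = sgn(phi)
        for row in range(n):
            term *= t[row][phi[row]]    # one entry from each row, column phi(row)
        total += term
    return total

print(d([[1, 2], [3, 4]]))  # -> -2, i.e. ad - bc
```

On the 3×3 matrix of Example 13.4 (rows 1 2 3, 4 5 6, 7 8 9) it returns 0, matching the signed sum 45−48−72+84+96−105.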
The signum function is clearly well-defined: just count the number of inversions.
So finally, we will show that this d(T) satisfies the conditions, proving that the determinant exists for all n.
Proof
Condition (4) is easy: for I, every term in the summation is 0 except the one for the identity permutation, which gives the product down the diagonal, 1.
For condition (3), suppose $T \xrightarrow{k\rho_i} \hat T$ and consider $d(\hat T)$:
$$\sum_{\text{perm } \phi} \hat t_{1,\phi(1)} \cdots \hat t_{i,\phi(i)} \cdots \hat t_{n,\phi(n)} \operatorname{sgn}(\phi) = \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdots k\, t_{i,\phi(i)} \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi) = k \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdots t_{i,\phi(i)} \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi) = k \cdot d(T)$$
which is the desired equality.
For condition (2), suppose $T \xrightarrow{\rho_i \leftrightarrow \rho_j} \hat T$. We must show $d(\hat T) = -d(T)$. Each term of the expansion
$$\sum_{\text{perm } \phi} \hat t_{1,\phi(1)} \cdots \hat t_{i,\phi(i)} \cdots \hat t_{j,\phi(j)} \cdots \hat t_{n,\phi(n)} \operatorname{sgn}(\phi)$$
takes its i-th and j-th factors from the swapped rows, so it equals a term of the expansion of d(T) taken over the permutation with the outputs at i and j interchanged — which, as we have established, flips the sign. Thus,
$$\sum_{\text{perm } \phi} \hat t_{1,\phi(1)} \cdots \hat t_{n,\phi(n)} \operatorname{sgn}(\phi) = \sum_{\text{perm } \sigma} t_{1,\sigma(1)} \cdots t_{n,\sigma(n)} \cdot (-\operatorname{sgn}(\sigma)) = -d(T)$$
For condition (1), suppose $T \xrightarrow{k\rho_i + \rho_j} \hat T$. Then
$$d(\hat T) = \sum_{\text{perm } \phi} \hat t_{1,\phi(1)} \cdots \hat t_{i,\phi(i)} \cdots \hat t_{j,\phi(j)} \cdots \hat t_{n,\phi(n)} \operatorname{sgn}(\phi) = \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdots t_{i,\phi(i)} \cdots (k\, t_{i,\phi(j)} + t_{j,\phi(j)}) \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi)$$
Distributing over the addition and breaking into two summations:
$$= k \cdot \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdots t_{i,\phi(i)} \cdots t_{i,\phi(j)} \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi) + \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdots t_{i,\phi(i)} \cdots t_{j,\phi(j)} \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi)$$
See that the second term is d(T).
In the first term, the entry is ti,ϕ(j), not tj,ϕ(j). This sum represents the determinant of a matrix S that is equal to T except row j of S is row i of T, giving S two copies of row i. Thus, the first term is 0, making d(T^)=d(T) as desired.
Thus, we have that for any n, there exists a determinant function Mn×n→R
Finally, we can show $|T| = |T^T|$ using the expansion:
$$|T| = \sum_{\text{perm } \phi} t_{1,\phi(1)} \cdot t_{2,\phi(2)} \cdots t_{n,\phi(n)} \operatorname{sgn}(\phi)$$
In $T^T$ the same products $t_{a,\phi(b)}$ appear: the expansion of |T| has every way of taking one entry from each row and column of T, and the expansion of $|T^T|$ has every way of taking one entry from each column and row of T. So the only possible difference is in sgn(ϕ); but $\operatorname{sgn}(\phi) = \operatorname{sgn}(\phi^{-1})$, so the two expansions agree.
Determinants as Size Functions
A box or parallelepiped in Rn formed by ⟨v1,...,vn⟩ is the set {t1v1+⋯+tnvn∣t1,...,tn∈[0,1]}
A parallelepiped in R2
The determinant of the 2×2 matrix
$$\begin{vmatrix} x_1 & x_2 \\ y_1 & y_2 \end{vmatrix}$$
represents the area of a parallelepiped in R2
Geometric Interpretations
Recall that the transpose does not change the determinant, so column operations are valid operations (just transposed row operations).
Also recall that scaling a column (or equivalently, a row) by k scales the whole determinant by k.
This makes sense, as it is analogous to scaling a side length of the box.
For the condition stating that row combinations (or, equivalently, column combinations) do not change the determinant: the base of the box is the same and the slant differs, but the height is unchanged, so the area remains the same.
Also, it is clear that the identity matrix has determinant 1: a box made from $\binom{1}{0}$ and $\binom{0}{1}$ is the unit square, which has area 1.
Swapping the vectors should negate the area. But area is positive, so the sign of the determinant reflects the orientation or sense of the box.
This gives the right-hand rule in R3: do a "thumbs up" with your right hand and place it on the spanning plane so that your fingers curl from v1 to v2. Vectors on the side with the thumb define positive-sized boxes.
The determinant of the product of two matrices is the product of the determinants ∣TS∣=∣T∣∣S∣
Proof
First, suppose T is singular and has no inverse. If TS is invertible, then there exists some M such that (TS)M=T(SM)=I, meaning T must be invertible. The contrapositive says if T is not invertible then neither is TS, so ∣T∣∣S∣=∣TS∣=0.
If T is invertible, then it is a product of elementary matrices, T=E1E2⋯Er. Showing that |ES|=|E||S| for all matrices S and elementary matrices E proves the result.
For $M_i(k)$, the elementary matrix that multiplies row i by k, condition (3) gives $|M_i(k)| = k|I| = k$, and also $|M_i(k)S| = k|S|$. The cases for the other two kinds of elementary matrices are similar.
From above, we can derive the determinant of the inverse:
$$1 = |I| = |TT^{-1}| = |T|\,|T^{-1}| \implies |T^{-1}| = \frac{1}{|T|}$$
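Both facts are easy to spot-check on small matrices (a sketch of my own; `det2`, `matmul2`, and the hand-computed 2×2 inverse are illustrative helpers, not from the text):

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[Fraction(2), Fraction(1)], [Fraction(5), Fraction(3)]]  # |T| = 1
S = [[Fraction(1), Fraction(4)], [Fraction(2), Fraction(0)]]  # |S| = -8

# |TS| = |T||S|
assert det2(matmul2(T, S)) == det2(T) * det2(S)

# |T^-1| = 1/|T|; here T^-1 = [[3, -1], [-5, 2]] by the 2x2 inverse formula
T_inv = [[Fraction(3), Fraction(-1)], [Fraction(-5), Fraction(2)]]
assert matmul2(T, T_inv) == [[1, 0], [0, 1]]
assert det2(T_inv) == Fraction(1) / det2(T)
```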
The volume of a box is the absolute value of the determinant of a matrix with those vectors as columns.
Cramer's Rule
Recall that a linear system is equivalent to a linear vector equation.
$$\begin{aligned} x_1 + 2x_2 &= 6 \\ 3x_1 + x_2 &= 8 \end{aligned} \iff x_1 \begin{pmatrix} 1 \\ 3 \end{pmatrix} + x_2 \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 8 \end{pmatrix}$$
The geometric interpretation: find the factors x1 and x2 by which we must scale the sides of the parallelogram so that the scaled sides add up to the target vector.
Consider expanding only one side of the parallelogram, and compare the sizes of the shaded rectangles.
Together, we have
$$x_1 \begin{vmatrix} 1 & 2 \\ 3 & 1 \end{vmatrix} = \begin{vmatrix} x_1 \cdot 1 & 2 \\ x_1 \cdot 3 & 1 \end{vmatrix} = \begin{vmatrix} x_1 \cdot 1 + x_2 \cdot 2 & 2 \\ x_1 \cdot 3 + x_2 \cdot 1 & 1 \end{vmatrix} = \begin{vmatrix} 6 & 2 \\ 8 & 1 \end{vmatrix}$$
So dividing both sides,
$$x_1 = \frac{\begin{vmatrix} 6 & 2 \\ 8 & 1 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 3 & 1 \end{vmatrix}} = \frac{-10}{-5} = 2$$
This gives a new way to solve systems of equations.
Cramer's Rule
Let A be an n×n matrix with nonzero determinant, let b be an n-tall column vector, and consider the linear system Ax=b. For any $i \in \{1,...,n\}$ let $B_i$ be the matrix obtained by substituting b for column i of A. Then the value of the i-th unknown is $x_i = |B_i|/|A|$.
If the matrix has a zero determinant then the system does not have a unique solution, and Cramer's Rule does not apply.
Example 13.7
Solve the following system of equations
$$\begin{aligned} 2x_1 + x_2 - x_3 &= 4 \\ x_1 + 3x_2 &= 2 \\ x_2 - 5x_3 &= 0 \end{aligned}$$
The corresponding matrix equation is
$$\begin{pmatrix} 2 & 1 & -1 \\ 1 & 3 & 0 \\ 0 & 1 & -5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \\ 0 \end{pmatrix}$$
We can find
$$|A| = \begin{vmatrix} 2 & 1 & -1 \\ 1 & 3 & 0 \\ 0 & 1 & -5 \end{vmatrix} = -26 \qquad |B_1| = \begin{vmatrix} 4 & 1 & -1 \\ 2 & 3 & 0 \\ 0 & 1 & -5 \end{vmatrix} = -52$$
$$|B_2| = \begin{vmatrix} 2 & 4 & -1 \\ 1 & 2 & 0 \\ 0 & 0 & -5 \end{vmatrix} = 0 \qquad |B_3| = \begin{vmatrix} 2 & 1 & 4 \\ 1 & 3 & 2 \\ 0 & 1 & 0 \end{vmatrix} = 0$$
So the solutions are
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} |B_1|/|A| \\ |B_2|/|A| \\ |B_3|/|A| \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}$$
Note that because this method requires taking the determinant, it is generally much slower to use Cramer's Rule for large matrices.
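Still, the rule is mechanical to implement; a minimal sketch (the names `det` and `cramer` are my own, with the determinant computed by the recursive cofactor expansion covered in the next section):

```python
from fractions import Fraction

def det(m):
    """A small recursive determinant: cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(A, b):
    """x_i = |B_i| / |A|, where B_i is A with column i replaced by b."""
    dA = det(A)
    return [Fraction(det([row[:i] + [b[r]] + row[i + 1:]
                          for r, row in enumerate(A)]), dA)
            for i in range(len(A))]

# Example 13.7
A = [[2, 1, -1], [1, 3, 0], [0, 1, -5]]
b = [4, 2, 0]
print(cramer(A, b))  # -> [Fraction(2, 1), Fraction(0, 1), Fraction(0, 1)]
```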
Laplace's Formula
Consider the permutation expansion
$$\begin{vmatrix} t_{1,1} & t_{1,2} & t_{1,3} \\ t_{2,1} & t_{2,2} & t_{2,3} \\ t_{3,1} & t_{3,2} & t_{3,3} \end{vmatrix} = t_{1,1} t_{2,2} t_{3,3} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{1,1} t_{2,3} t_{3,2} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} + t_{1,2} t_{2,1} t_{3,3} \begin{vmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{1,2} t_{2,3} t_{3,1} \begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{vmatrix} + t_{1,3} t_{2,1} t_{3,2} \begin{vmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + t_{1,3} t_{2,2} t_{3,1} \begin{vmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{vmatrix}$$
Pick a row or column and factor it out. Suppose we choose the first row:
$$\begin{vmatrix} t_{1,1} & t_{1,2} & t_{1,3} \\ t_{2,1} & t_{2,2} & t_{2,3} \\ t_{3,1} & t_{3,2} & t_{3,3} \end{vmatrix} = t_{1,1} \left[ t_{2,2} t_{3,3} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{2,3} t_{3,2} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} \right] + t_{1,2} \left[ t_{2,1} t_{3,3} \begin{vmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{2,3} t_{3,1} \begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{vmatrix} \right] + t_{1,3} \left[ t_{2,1} t_{3,2} \begin{vmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + t_{2,2} t_{3,1} \begin{vmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{vmatrix} \right]$$
Using the property that a row swap changes the sign, perform row swaps so that the matrices in each bracket match those in the first. This takes one swap for the $t_{1,2}$ group and two for the $t_{1,3}$ group:
$$= t_{1,1} \left[ t_{2,2} t_{3,3} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{2,3} t_{3,2} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} \right] - t_{1,2} \left[ t_{2,1} t_{3,3} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{2,3} t_{3,1} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} \right] + t_{1,3} \left[ t_{2,1} t_{3,2} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + t_{2,2} t_{3,1} \begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} \right]$$
The terms in the square brackets simplify to 2×2 determinants:
$$= t_{1,1} \begin{vmatrix} t_{2,2} & t_{2,3} \\ t_{3,2} & t_{3,3} \end{vmatrix} - t_{1,2} \begin{vmatrix} t_{2,1} & t_{2,3} \\ t_{3,1} & t_{3,3} \end{vmatrix} + t_{1,3} \begin{vmatrix} t_{2,1} & t_{2,2} \\ t_{3,1} & t_{3,2} \end{vmatrix}$$
The i,j minor of an n×n matrix T is the (n−1)×(n−1) matrix formed by deleting row i and column j of T. The i,j cofactor $T_{i,j}$ of T is $(-1)^{i+j}$ times the determinant of the i,j minor of T
Example 13.8
For the matrix
$$S = \begin{pmatrix} 3 & 1 & 2 \\ 5 & 4 & -1 \\ 7 & 0 & -3 \end{pmatrix}$$
the 2,3 minor is $\begin{pmatrix} 3 & 1 \\ 7 & 0 \end{pmatrix}$
and the cofactor is $S_{2,3} = (-1)^{2+3} \begin{vmatrix} 3 & 1 \\ 7 & 0 \end{vmatrix} = 7$
Laplace's formula finds the determinant of an n×n matrix by expanding by cofactors along any row i or column j:
$$|T| = t_{i,1} T_{i,1} + t_{i,2} T_{i,2} + \cdots + t_{i,n} T_{i,n} = t_{1,j} T_{1,j} + t_{2,j} T_{2,j} + \cdots + t_{n,j} T_{n,j}$$
Example 13.9
Find the determinant
$$\begin{vmatrix} 3 & 1 & 2 \\ 5 & 4 & -1 \\ 7 & 0 & -3 \end{vmatrix}$$
by expanding along the second row:
$$\begin{vmatrix} 3 & 1 & 2 \\ 5 & 4 & -1 \\ 7 & 0 & -3 \end{vmatrix} = -5 \begin{vmatrix} 1 & 2 \\ 0 & -3 \end{vmatrix} + 4 \begin{vmatrix} 3 & 2 \\ 7 & -3 \end{vmatrix} - (-1) \begin{vmatrix} 3 & 1 \\ 7 & 0 \end{vmatrix} = -5(-3) + 4(-23) + 1(-7) = 15 - 92 - 7 = -84$$
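The expansion works along any row, which is easy to confirm in code (a sketch of my own; the names `minor`, `det`, and `expand_along_row` are illustrative):

```python
def minor(t, i, j):
    """Delete row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(t) if r != i]

def det(t):
    if len(t) == 1:
        return t[0][0]
    # expand along the first row
    return sum((-1) ** j * t[0][j] * det(minor(t, 0, j)) for j in range(len(t)))

def expand_along_row(t, i):
    """Laplace expansion along row i; equals det(t) for every choice of i."""
    return sum((-1) ** (i + j) * t[i][j] * det(minor(t, i, j))
               for j in range(len(t)))

T = [[3, 1, 2], [5, 4, -1], [7, 0, -3]]
print([expand_along_row(T, i) for i in range(3)])  # -> [-84, -84, -84]
```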
The matrix adjoint (or the classical adjoint or adjugate) of a square matrix T is
$$\operatorname{adj}(T) = \begin{pmatrix} T_{1,1} & T_{2,1} & \cdots & T_{n,1} \\ T_{1,2} & T_{2,2} & \cdots & T_{n,2} \\ \vdots & \vdots & & \vdots \\ T_{1,n} & T_{2,n} & \cdots & T_{n,n} \end{pmatrix}$$
note that the row i column j entry is Tj,i, the j,i cofactor.
Example 13.10
For the same matrix
$$S = \begin{pmatrix} 3 & 1 & 2 \\ 5 & 4 & -1 \\ 7 & 0 & -3 \end{pmatrix}$$
the matrix adjoint is
$$\operatorname{adj}(S) = \begin{pmatrix} -12 & 3 & -9 \\ 8 & -23 & 13 \\ -28 & 7 & 7 \end{pmatrix}$$
For a square matrix T, T·adj(T)=adj(T)·T=|T|·I. In other words,
$$\begin{pmatrix} t_{1,1} & \cdots & t_{1,n} \\ \vdots & & \vdots \\ t_{n,1} & \cdots & t_{n,n} \end{pmatrix} \begin{pmatrix} T_{1,1} & \cdots & T_{n,1} \\ \vdots & & \vdots \\ T_{1,n} & \cdots & T_{n,n} \end{pmatrix} = \begin{pmatrix} |T| & 0 & \cdots & 0 \\ 0 & |T| & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & |T| \end{pmatrix}$$
Proof
Laplace's formula directly shows the diagonal entries are ∣T∣.
For any off-diagonal entry, the multiplication gives ti,1⋅Tk,1+ti,2⋅Tk,2+⋯+ti,n⋅Tk,n=0
because it represents the expansion along row k of a matrix whose row i equals its row k, and a matrix with two identical rows has determinant 0.
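The theorem is easy to confirm on the running example (a sketch of my own; the names `det`, `cofactor`, and `adjugate` are illustrative, with the determinant again computed by cofactor expansion):

```python
def det(t):
    if len(t) == 1:
        return t[0][0]
    return sum((-1) ** j * t[0][j]
               * det([row[:j] + row[j + 1:] for row in t[1:]])
               for j in range(len(t)))

def cofactor(t, i, j):
    m = [row[:j] + row[j + 1:] for r, row in enumerate(t) if r != i]
    return (-1) ** (i + j) * det(m)

def adjugate(t):
    """Entry (i, j) of adj(T) is the (j, i) cofactor of T."""
    n = len(t)
    return [[cofactor(t, j, i) for j in range(n)] for i in range(n)]

S = [[3, 1, 2], [5, 4, -1], [7, 0, -3]]
adj = adjugate(S)
n = len(S)
product = [[sum(S[i][k] * adj[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
print(product)  # -> [[-84, 0, 0], [0, -84, 0], [0, 0, -84]], i.e. |S| * I
```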